Web Survey Bibliography
In the last two decades, Web or Internet surveys have had a profound impact on the survey world. The change has been felt most strongly in the market research sector, where many companies have switched from telephone surveys or other modes of data collection to online surveys. The academic and public policy/social attitude sectors were slower to adopt, taking more care to evaluate the effect of the change on key surveys and trends, and conducting research on how best to design and implement Web surveys. The public sector (i.e., government statistical offices) has been the slowest to embrace Web surveys, in part because the stakes are much higher, both in terms of the precision requirements of the estimates and in terms of the public scrutiny of such data. However, National Statistical Offices (NSOs) are heavily engaged in research and development on Web surveys, most notably as part of a mixed-mode data collection strategy or in the establishment survey world, where repeated measurement and quick turnaround are the norm. Along with the uneven progress in the adoption of Web surveys have come a number of concerns about the method, particularly with regard to its representational or inferential aspects. At the same time, a great deal of research has been conducted on the measurement side of Web surveys, developing ways to improve the quality of data collected through this medium. This seminar focuses on these two key elements of Web surveys: inferential issues and measurement issues. Each of these broad areas is covered in turn in the following sections. The inferential section is largely concerned with methods of sampling for Web surveys and the associated coverage and nonresponse issues. Different ways in which samples are drawn, using both non-probability and probability-based approaches, are discussed.
The assumptions behind the different approaches to inference in Web surveys, the benefits and risks inherent in each, and the appropriate use of particular approaches to sample selection are reviewed. The following section then addresses a variety of issues related to the design of Web survey instruments, with a review of the empirical literature and practical recommendations for design that minimizes measurement error.
A total survey error framework (see Deming, 1944; Kish, 1965; Groves, 1989) is useful for evaluating the quality or value of a method of data collection such as Web or Internet surveys. In this framework, there are several different sources of error in surveys, and these can be divided into two main groups: errors of non-observation and errors of observation. Errors of non-observation refer to failures to observe or measure eligible members of the population of interest, and include coverage errors, sampling errors, and nonresponse errors. Errors of non-observation are primarily concerned with issues of selection bias. Errors of observation are also called measurement errors (see Biemer et al., 1991; Lessler and Kalsbeek, 1992). Sources of measurement error include the respondent, the instrument, the mode of data collection and (in interviewer-administered surveys) the interviewer. In addition, processing errors can affect all types of surveys. Errors can also be classified according to whether they affect the variance or the bias of survey estimates, with both contributing to the overall mean square error (MSE) of a survey statistic. A total survey error perspective aims to minimize mean square error for a set of survey statistics, given a set of resources. Thus, cost and time are also important elements in evaluating the quality of a survey. While Web surveys are generally significantly less expensive than other modes of data collection, and are quicker to conduct, serious concerns have been raised about errors of non-observation or selection bias. On the other hand, there is growing evidence that using Web surveys can improve the quality of the data collected (i.e., reduce measurement errors) relative to other modes, depending on how the instruments are designed. Given this framework, we first discuss errors of non-observation or selection bias that may raise concerns about the inferential value of Web surveys, particularly those targeted at the general population.
Then in the second part we discuss ways that the design of the Web survey instrument can affect measurement errors.
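The MSE decomposition referred to above can be made explicit. The following is the standard textbook identity, not a formula taken from the source, and the notation (an estimator \(\hat{\theta}\) of a population parameter \(\theta\)) is ours:

```latex
% Mean square error of an estimator \hat{\theta} of a parameter \theta:
% MSE = variance + squared bias.
\[
  \operatorname{MSE}(\hat{\theta})
    = \mathbb{E}\!\left[(\hat{\theta} - \theta)^2\right]
    = \operatorname{Var}(\hat{\theta})
    + \bigl(\operatorname{Bias}(\hat{\theta})\bigr)^2,
  \qquad
  \operatorname{Bias}(\hat{\theta}) = \mathbb{E}[\hat{\theta}] - \theta .
\]
```

In this framing, coverage, nonresponse, and measurement errors typically enter through the bias term (e.g., selection bias when the online population differs systematically from the target population), while sampling error contributes mainly to the variance term; a total survey error design trades both off against cost and timeliness.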
Web survey bibliography (4086)
- Current Projects at University of Ljubljana; 2011; Lozar Manfreda, K.
- Collecting information with Knowledge technologies; 2011; Foulonneau, M.
- E-dater, Artificial Actors, and German Households; 2011; Hebing, M.
- DIME-SHS; 2011; Lesnard, L.
- Web based Data Collection – A Jungle becoming a Field or a Field becoming a Jungle?; 2011; Ole Finnemann, N.
- Standard Definitions: Final Dispositions of Case Codes and Outcome Rates for Surveys 2011; 2011
- Calibrating Non-Probability Internet Samples with Probability Samples Using Early Adopter Characteristics...; 2011; DiSogra, C., Cobb, C. L., Chan, E., Dennis, J. M.
- How Visual Design Affects the Interpretability of Survey Questions; 2011; Toepoel, V., Dillman, D. A.
- True Longitudinal and Probability-Based Internet Panels: Evidence from the Netherlands; 2011; Das, M., Scherpenzeel, A.
- Web Survey Methodology: Interface Design, Sampling and Statistical Inference; 2011; Couper, M. P.
- Effect of interview modes on measurement of identity; 2011; Nandi, A., Platt, L.
- Maintaining Cross-Sectional Representativeness in a Longitudinal General Population Survey ; 2011; Lynn, P.
- Understanding Society Innovation Panel Wave 3: Results from Methodological Experiments; 2011; Burton, J., Budd, S., Gilbert, E., Jaeckle, A., McFall, S., Uhrig, S. C. N.
- The Effect of a Mixed Mode Wave on Subsequent Attrition in a Panel Survey: Evidence from the Understanding...; 2011; Lynn, P.
- Seeing Through the Eyes of the Respondent: An Eye-tracking Study on Survey Question Comprehension; 2011; Lenzner, A., Kaczmirek, L., Galesic, M.
- Eye Tracking in testing questionnaires: What’s the added value?; 2011; Tries, S.
- Panel Recruitment via Facebook; 2011; Toepoel, V.
- Usability and burden measurement in online forms; 2011; Thomsen, P.
- Dynamic Data Editing in online data collection for the Vacant Positions Survey; 2011; Stax, H.-P.
- Utilizing Web Technology in Business Data Collection: Some Norwegian, Dutch and Danish Experiences; 2011; Snijkers, G., Haraldsen, G., Stax, H.-P.
- Web survey software; 2011; Slavec, A., Berzelak, N., Vehovar, V.
- Disentangling relative mode effects for the web survey mode in the Safety Monitor; 2011; Schouten, B., van de Brakel, J., Buelens, B., Klausch, L. T., van der Laan, J.
- Improving validity in web surveys with hard-to-reach targets: Online Respondent Driven Sampling...; 2011; Mavletova, A. M.
- Developing Electronic Questionnaires at Statistics Canada: Experiences and Challenges in a Changing...; 2011; Lawrence, D.
- Experiences with mixed mode mail & web-enquêtes in probability samples with known individuals; 2011; Kalgraff Skjak, K., Kolsrud, K.
- Effects of internet data collection in business surveys – the case of the Dutch SBS; 2011; Giesen, D.
- Ignoring the compatibility of online questionnaires may bias the psychological composition of your sample...; 2011; Funke, F.
- Video enhanced web survey; 2011; Fuchs, M., Kunz, T., Gebhard, F.
- Keeping Up Appearances: Maintaining standards during strategic changes in electronic reporting; 2011; Farrell, E., Hewett, K.
- Respondent engagement: using usability testing; 2011; Dowling, Z.
- Scrolling or paging - it depends; 2011; Blanke, K.
- Research Applications for Mobile Data Collection; 2011; Fawson, B.
- Results from Real-Time Data Collection (RTD) vs. Data from Traditional Panelists: Is it valid to combine...; 2011; Pingitore, G., Witten, S., Walker, A., Seldin, D., Ellrodt, R., Muns, N., Parks, C., Serrato, C.
- Widening the Net: A Comparison of Online Intercept and Access Panel Sampling; 2011; Bakken, D. G., Nawani, R.
- Making it fit: how survey technology providers are responding to the challenges of handling web surveys...; 2011; Macer, T.
- Probably the Best Bias in the World?; 2011; Dent, T.
- Optimus Modus: Comparing interviewing modes for visitor surveys; 2011; Stanley, N., Jenkins, S.
- The development of the KubeMatrix™ as a mobile app for Market Research Online Communities; 2011; Birks, D. F., Wilson, De.
- Online Research – Game On!: A look at how gaming techniques can transform your online research; 2011; Puleston, J.
- Engagement, Consistency, Reach – why the Technology Landscape Precludes All Three; 2011; Johnson, A., Rolfe, G.
- The use of paradata to improve data collection at Statistics Canada: Empirical results and research; 2011; Gambino, J., Wrighte, D.
- Medium Node: NSF Census Research Network; 2011; McCutcheon, A. L., Belli, R. F., Olson, K., Smyth, J. D., Soh, L.-K.
- A new online building survey system; 2011; Wang, Yic.
- A Comparison of Internet-Based Participant Recruitment Methods: Engaging the Hidden Population of Cannabis...; 2011; Temple, E. C., Brown, R. F.
- The German Access Panel and the Impact of Response Propensities; 2011; Amarov, B., Enderle, T., Muennich, R., Rendtel, U., Zins, S.
- Web Survey Process within the Concept of eSocial Sciences; 2011; Vehovar, V.
- Can biomarkers be collected in an Internet survey? A pilot study in the LISS panel; 2011; Avendano, M., Mackenbach, J., Scherpenzeel, A.
- Innovations in survey sampling design: Discussion of three contributions presented at the U.S. Census...; 2011; Opsomer, J.
- A Bayesian analysis of small area probabilities under a constraint; 2011; Nandram, B., Sayit, H.
- Adaptive network and spatial sampling; 2011; Thompson, S. K.